Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while the FAIs are the higher-risk option with a chance at a slightly better payoff. Are you sure that the FAI is still what has to be chosen?
This line of conversation could benefit from some specificity and from defining what you mean by “explicitly friendly” and “safe”. I think of FAI as AI for which there is good evidence that it is safe, so for me, whenever there are multiple AGIs to choose between, the Friendliest one is the lowest-risk one, regardless of who developed it and why. Do you agree? If not, what specifically are you saying?